21 research outputs found

    Robotic cloth manipulation for clothing assistance task using Dynamic Movement Primitives

    The need for robotic clothing assistance is growing in assistive robotics, as dressing is one of the most basic and essential activities of daily living for elderly and disabled people. In this study we investigate the applicability of Dynamic Movement Primitives (DMP) as a task-parameterization model for performing a clothing assistance task. The robotic cloth manipulation task consists of putting a clothing article onto both arms. The robot trajectory varies significantly across postures, and cooperative manipulation of a non-rigid, highly deformable clothing article can fail in many ways. We performed experiments on a soft mannequin instead of a human subject. The results show that DMPs are able to generalize the movement trajectory to a modified posture.
    3rd International Conference of the Robotics Society of India (AIR '17: Advances in Robotics), June 28 - July 2, 2017, New Delhi, India
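
    A Dynamic Movement Primitive encodes a demonstrated trajectory as a stable point attractor plus a learned forcing term, so the same movement can be replayed towards a different goal. The sketch below is a minimal, self-contained 1-D illustration of that mechanism (the standard DMP formulation, not the authors' code); all gains, basis counts and the toy demonstration are illustrative assumptions.

    # Minimal 1-D discrete Dynamic Movement Primitive (DMP): learn a forcing term
    # from one demonstration, then generalize the movement to a modified goal.
    import numpy as np

    class DMP1D:
        def __init__(self, n_basis=20, alpha=25.0, beta=25.0 / 4, alpha_s=4.0):
            self.n_basis, self.alpha, self.beta, self.alpha_s = n_basis, alpha, beta, alpha_s
            self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))      # basis centres in phase s
            self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5) ** 2
            self.w = np.zeros(n_basis)

        def _forcing(self, s):
            psi = np.exp(-self.h * (s - self.c) ** 2)
            return np.dot(psi, self.w) * s / (psi.sum() + 1e-10)

        def fit(self, y_demo, dt):
            """Learn forcing-term weights from one demonstrated trajectory."""
            self.y0, self.g, self.tau = y_demo[0], y_demo[-1], dt * (len(y_demo) - 1)
            yd = np.gradient(y_demo, dt)
            ydd = np.gradient(yd, dt)
            t = np.arange(len(y_demo)) * dt
            s = np.exp(-self.alpha_s * t / self.tau)
            f_target = (self.tau ** 2 * ydd
                        - self.alpha * (self.beta * (self.g - y_demo) - self.tau * yd))
            f_target /= (self.g - self.y0)
            psi = np.exp(-self.h * (s[:, None] - self.c) ** 2)          # (T, n_basis)
            for i in range(self.n_basis):
                num = np.sum(s * psi[:, i] * f_target)
                den = np.sum(s ** 2 * psi[:, i]) + 1e-10
                self.w[i] = num / den

        def rollout(self, g_new=None, dt=0.01):
            """Reproduce the movement, optionally towards a modified goal."""
            g = self.g if g_new is None else g_new
            y, yd, s, out = self.y0, 0.0, 1.0, []
            for _ in np.arange(0, self.tau, dt):
                f = self._forcing(s) * (g - self.y0)
                ydd = (self.alpha * (self.beta * (g - y) - self.tau * yd) + f) / self.tau ** 2
                yd += ydd * dt
                y += yd * dt
                s += (-self.alpha_s * s / self.tau) * dt
                out.append(y)
            return np.array(out)

    # Toy demonstration: a smooth reach from 0 to 1; then generalize to goal 1.5,
    # e.g. a longer reach required by a modified posture.
    dt = 0.01
    t = np.arange(0, 1 + dt, dt)
    demo = 0.5 * (1 - np.cos(np.pi * t))
    dmp = DMP1D()
    dmp.fit(demo, dt)
    reach_new = dmp.rollout(g_new=1.5)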

    Bayesian Nonparametric Learning of Cloth Models for Real-time State Estimation

    Robotic solutions to clothing assistance can significantly improve quality of life for the elderly and disabled. Real-time estimation of the human-cloth relationship is crucial for efficient learning of motor skills for robotic clothing assistance. The major challenge is cloth-state estimation, owing to the cloth's inherent non-rigidity and to occlusion. In this study, we present a novel framework for real-time estimation of the cloth state using a low-cost depth sensor, making it suitable for practical, real-world deployment. The framework relies on the hypothesis that clothing articles are constrained to a low-dimensional latent manifold during clothing tasks. We propose the use of manifold relevance determination (MRD) to learn an offline cloth model that can be used to perform informed cloth-state estimation in real time. The cloth model is trained using observations from a motion capture system and a depth sensor. MRD provides a principled probabilistic framework for inferring the accurate motion-capture state when only the noisy depth-sensor feature state is available in real time. The experimental results demonstrate that our framework is capable of learning consistent task-specific latent features using few data samples and has the ability to generalize to unseen environmental settings. We further present several factors that affect the predictive performance of the learned cloth-state model.
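
    The central assumption above is that the accurate motion-capture view and the noisy depth-sensor view of the cloth share a low-dimensional latent manifold, so the mocap state can be inferred from depth features alone at run time. Below is a hedged sketch of learning such a shared model with the MRD implementation in the GPy library; the synthetic two-view data, dimensionalities and optimizer settings are illustrative assumptions rather than the paper's setup.

    # Learn a shared latent space over two views of the same cloth state.
    import numpy as np
    import GPy

    rng = np.random.default_rng(0)
    T, Q = 200, 5                                    # frames and latent dimensionality (assumed)
    t = np.linspace(0, 2 * np.pi, T)[:, None]
    latent = np.hstack([np.sin(t), np.cos(t)])       # toy 2-D cloth-state manifold

    # Two "views" of the same underlying cloth state:
    W_mocap = rng.normal(size=(2, 30))               # 30-D marker positions (toy)
    W_depth = rng.normal(size=(2, 60))               # 60-D depth-image features (toy)
    Y_mocap = latent @ W_mocap + 0.01 * rng.normal(size=(T, 30))
    Y_depth = latent @ W_depth + 0.10 * rng.normal(size=(T, 60))   # noisier sensor

    m = GPy.models.MRD([Y_mocap, Y_depth], input_dim=Q, num_inducing=30,
                       kernel=GPy.kern.RBF(Q, ARD=True), Ynames=['mocap', 'depth'])
    m.optimize(messages=False, max_iters=500)

    # The ARD lengthscales of each view's kernel indicate which latent dimensions
    # are shared between views and which are private; the shared dimensions are
    # what allow the accurate mocap state to be inferred from depth features
    # alone at test time (the real-time step described in the abstract).
    for name, sub in zip(['mocap', 'depth'], m.bgplvms):
        print(name, 1.0 / np.asarray(sub.kern.lengthscale))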

    Data-efficient Learning of Robotic Clothing Assistance using Bayesian Gaussian Process Latent Variable Models

    Motor-skill learning for complex robotic tasks is a challenging problem due to high task variability. Robotic clothing assistance is one such challenging problem that can greatly improve the quality of life of the elderly and disabled. In this study, we propose a data-efficient representation that encodes task-specific motor skills of the robot using Bayesian nonparametric latent variable models. The effectiveness of the proposed motor-skill representation is demonstrated in two ways: (1) through a real-time controller that can be used as a tool for learning from demonstration to impart novel skills to the robot, and (2) by demonstrating that policy-search reinforcement learning in such a task-specific latent space outperforms learning in the high-dimensional joint configuration space of the robot. We implement our proposed framework in a practical setting with a dual-arm robot performing clothing assistance tasks.
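
    The key point of the abstract is that a Bayesian GPLVM gives a low-dimensional latent space in which skills can be taught and policies searched, instead of working in the robot's full joint space. The sketch below illustrates that idea with GPy: fit a Bayesian GPLVM to toy dual-arm joint trajectories, then perturb the latent trajectory (standing in for one policy-search update) and decode it back to joint commands. Dimensions, data and model settings are assumptions for illustration, not the paper's configuration.

    # Compress high-dimensional joint trajectories into a latent skill space.
    import numpy as np
    import GPy

    rng = np.random.default_rng(1)
    T, D_joints, Q = 100, 14, 3                    # timesteps, dual-arm DoF, latent dim (assumed)
    phase = np.linspace(0, 1, T)[:, None]
    # Toy "demonstrations": smooth joint trajectories driven by a few hidden factors.
    basis = np.hstack([np.sin(2 * np.pi * phase), np.cos(2 * np.pi * phase), phase])
    Y_demo = basis @ rng.normal(size=(3, D_joints)) + 0.01 * rng.normal(size=(T, D_joints))

    m = GPy.models.BayesianGPLVM(Y_demo, input_dim=Q, num_inducing=20,
                                 kernel=GPy.kern.RBF(Q, ARD=True))
    m.optimize(messages=False, max_iters=500)

    X_mean = m.X.mean.values                       # latent trajectory of the demonstration
    # A policy-search step explores in this Q-dimensional space instead of the
    # 14-dimensional joint space: perturb the latent trajectory and decode it.
    X_candidate = X_mean + 0.1 * rng.normal(size=X_mean.shape)
    joints_candidate, _ = m.predict(X_candidate)   # decoded joint-space rollout
    print(joints_candidate.shape)                  # (T, D_joints)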

    Estimation of the Topological Relationship between a Care Receiver and Clothing Using an RGB-D Camera: Toward the Development of a Clothing Assistance Robot

    Master of Engineering, Nara Institute of Science and Technology, master's thesis no. 6253

    Data-efficient Learning for Robotic Clothing Assistance Using Nonparametric Bayesian Latent Variable Models

    Doctor of Engineering, Nara Institute of Science and Technology, doctoral thesis no. 1473

    Data-efficient Skill Transfer for Robotics using Virtual Reality and Imitation Learning

    Speaker affiliation: Mathematical Informatics Lab. Venue: Large Lecture Room L1, Information Science Building. Learning from Demonstration (LfD) is a paradigm in which humans demonstrate how to perform complex tasks, and these demonstrations are used to train autonomous agents. However, the performance of LfD is highly sensitive to the quality of the demonstrations, which in turn depends on the user interface. In this presentation, we propose the use of Virtual Reality (VR) to develop an intuitive interface that enables users to provide good demonstrations. We apply this approach to the task of training a visual attention system, a crucial component for tasks such as autonomous driving and human-robot interaction. We show that an interaction time of a few minutes is sufficient to train a deep neural network to successfully learn attention strategies.
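
    As a rough illustration of the training step mentioned above, the sketch below does behavioural cloning of attention: image/gaze pairs (synthetic stand-ins for data recorded during a VR demonstration) are used to train a small convolutional network that predicts a normalized attention point. The architecture and training loop are illustrative assumptions, not the system described in the talk.

    # Behavioural cloning of a visual attention target from demonstration frames.
    import torch
    import torch.nn as nn

    class AttentionNet(nn.Module):
        """Predicts a normalized (x, y) attention point from an RGB image."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, 64),
                                      nn.ReLU(), nn.Linear(64, 2), nn.Sigmoid())

        def forward(self, img):
            return self.head(self.features(img))

    # Stand-in for a few minutes of VR demonstration data: frames and the gaze
    # target the demonstrator attended to in each frame (both synthetic here).
    frames = torch.rand(256, 3, 64, 64)
    gaze = torch.rand(256, 2)

    model = AttentionNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(10):
        for i in range(0, len(frames), 32):        # simple mini-batch loop
            batch_x, batch_y = frames[i:i + 32], gaze[i:i + 32]
            loss = loss_fn(model(batch_x), batch_y)
            opt.zero_grad()
            loss.backward()
            opt.step()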